Most existing scene text detectors require large-scale training data, which is hard to scale up for two major reasons: 1) scene text images often have domain-specific distributions; 2) collecting large-scale annotated scene text images is laborious. We study domain adaptive scene text detection, a largely neglected yet very meaningful task that aims to transfer knowledge from labelled scene text images to unlabelled images in various new domains. Specifically, we design SCAST, a subcategory-aware self-training technique that effectively mitigates network overfitting and noisy pseudo labels in domain adaptive scene text detection. SCAST consists of two novel designs. For labelled source data, it introduces pseudo subcategories for both foreground texts and background stuff, which helps train more generalizable source models with multi-class detection objectives. For unlabelled target data, it mitigates network overfitting by co-regularizing the binary and subcategory classifiers trained in the source domain. Extensive experiments show that SCAST consistently achieves superior detection performance across multiple public benchmarks, and it also generalizes well to other domain adaptive detection tasks such as vehicle detection.
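The abstract only names the co-regularization idea; as an illustration, below is a minimal sketch (not the authors' implementation) of one way a binary text/background head and a subcategory head could be kept consistent on unlabelled target proposals. The tensor layout, the marginalization of subcategory probabilities back to binary, and the symmetric-KL form are all assumptions.

```python
import torch
import torch.nn.functional as F

def coregularization_loss(binary_logits, subcat_logits, num_text_subcats):
    """Consistency between a binary text/background head and a subcategory head.

    binary_logits:  (N, 2)                 -- [background, text]
    subcat_logits:  (N, 1 + num_text_subcats) -- [background, text_subcat_1, ...]
    Subcategory probabilities are marginalized back to binary and compared
    with the binary head via a symmetric KL term.
    """
    assert subcat_logits.size(-1) == 1 + num_text_subcats
    p_bin = F.softmax(binary_logits, dim=-1)                       # (N, 2)
    p_sub = F.softmax(subcat_logits, dim=-1)                       # (N, 1 + K)
    p_sub_marg = torch.stack(
        [p_sub[:, 0], p_sub[:, 1:].sum(dim=-1)], dim=-1)           # (N, 2)
    kl = lambda p, q: (p * (p.clamp_min(1e-8).log() - q.clamp_min(1e-8).log())).sum(-1)
    return 0.5 * (kl(p_bin, p_sub_marg) + kl(p_sub_marg, p_bin)).mean()

# toy usage on unlabelled target proposals
loss = coregularization_loss(torch.randn(16, 2), torch.randn(16, 5), num_text_subcats=4)
```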
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows explicitly associating 3D and semantic-rich captions. Further, to facilitate coarse-to-fine visual-semantic representation learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and 14.5% $\sim$ 50.4% hAP$_{50}$ on open-vocabulary semantic and instance segmentation, but also shows robust transferability on challenging zero-shot domain transfer tasks. Code will be available at https://github.com/CVMI-Lab/PLA.
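For intuition only, the contrastive objective connecting 3D features and captions could take a standard symmetric InfoNCE form such as the sketch below; the pooling level, embedding dimensionality, and temperature are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def point_caption_contrastive_loss(point_feats, caption_embeds, temperature=0.07):
    """Symmetric InfoNCE between pooled 3D features and caption embeddings.

    point_feats:    (B, D) -- one pooled feature per 3D region (scene/view/entity level)
    caption_embeds: (B, D) -- matching caption embeddings from a frozen VL text encoder
    """
    p = F.normalize(point_feats, dim=-1)
    c = F.normalize(caption_embeds, dim=-1)
    logits = p @ c.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)   # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy usage
loss = point_caption_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```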
Scene text recognition has attracted increasing interest in recent years due to its wide range of applications in multilingual translation, autonomous driving, and beyond. In this report, we describe our solution to the Out-Of-Vocabulary Scene Text Understanding (OOV-ST) challenge, which aims to extract out-of-vocabulary (OOV) words from natural scene images. Our oCLIP-based model achieves 28.59% in H-Mean, ranking first on the end-to-end OOV word recognition track of the OOV challenge at the ECCV 2022 TiE Workshop.
This report presents our winning solution to the ECCV 2022 challenge on Out-Of-Vocabulary Scene Text Understanding (OOV-ST): Cropped Word Recognition. The challenge is held in the context of the ECCV 2022 Text in Everything (TiE) workshop and aims to extract out-of-vocabulary words from natural scene images. In the competition, we first pre-train on synthetic datasets and then fine-tune the model with data augmentation on the training set. Meanwhile, two additional models are trained specifically for long and vertical texts. Finally, we ensemble the outputs of models with different layers, different backbones, and different seeds. Our solution achieves an overall word accuracy of 69.73% when both in-vocabulary and out-of-vocabulary words are considered.
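The model-combination step is described only at a high level; purely as a hypothetical illustration (not the team's actual ensembling code), a confidence-weighted vote over per-crop word predictions could look like this:

```python
from collections import defaultdict

def ensemble_word_predictions(predictions):
    """Combine cropped-word predictions from several recognizers.

    predictions: list of (word, confidence) pairs, one per model.
    Returns the word whose accumulated confidence is highest
    (a simple confidence-weighted vote).
    """
    scores = defaultdict(float)
    for word, conf in predictions:
        scores[word] += conf
    return max(scores, key=scores.get)

# toy usage: three recognizers disagree on one crop
print(ensemble_word_predictions([("STOP", 0.91), ("ST0P", 0.40), ("STOP", 0.85)]))
```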
Most existing scene text detectors focus on detecting characters or words, which capture only partial text messages because contextual information is missing. For a better understanding of text in scenes, it is more desirable to detect contextual text blocks (CTBs), each consisting of one or more integral text units (e.g., characters, words, or phrases) in natural reading order and conveying a complete textual message. This paper presents contextual text detection, a new setup that detects CTBs for better understanding of text in scenes. We formulate the new setup as a dual detection task that first detects integral text units and then groups them into CTBs. To this end, we design a novel scene text clustering technique that treats integral text units as tokens and groups those belonging to the same CTB into an ordered token sequence. In addition, we create two datasets, SCUT-CTW-Context and ReCTS-Context, to facilitate future research, where each CTB is well annotated by an ordered sequence of integral text units. Further, we introduce three metrics that measure contextual text detection in terms of local accuracy, continuity, and global accuracy. Extensive experiments show that our method detects CTBs accurately, which effectively facilitates downstream tasks such as text classification and translation. The project is available at https://sg-vilab.github.io/publication/xue20222contextual/.
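The paper's clustering technique is not detailed in the abstract; purely as an interface illustration, the sketch below assembles ordered CTBs from detected integral text units, assuming hypothetical per-unit group ids and reading-order indices (the field names are assumptions, not the paper's representation).

```python
from collections import defaultdict

def assemble_ctbs(units):
    """Assemble contextual text blocks from detected integral text units.

    units: list of dicts with keys
        'text'     -- recognized string of the unit (word/phrase)
        'group_id' -- id of the CTB the unit is predicted to belong to
        'order'    -- predicted reading-order index within that CTB
    Returns {group_id: ordered list of unit texts}.
    """
    blocks = defaultdict(list)
    for u in units:
        blocks[u['group_id']].append(u)
    return {g: [u['text'] for u in sorted(us, key=lambda u: u['order'])]
            for g, us in blocks.items()}

# toy usage
units = [
    {'text': 'WAY',  'group_id': 0, 'order': 1},
    {'text': 'GIVE', 'group_id': 0, 'order': 0},
    {'text': 'EXIT', 'group_id': 1, 'order': 0},
]
print(assemble_ctbs(units))   # {0: ['GIVE', 'WAY'], 1: ['EXIT']}
```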
Recently, vision-language pre-training (VLP) techniques have greatly benefited various vision-language tasks by jointly learning visual and textual representations, which should intuitively help optical character recognition (OCR) tasks given the rich visual and textual information in scene text images. However, these methods cannot cope well with OCR tasks because of the difficulty of instance-level text encoding and image-text pair acquisition (i.e., images and the text captured within them). This paper presents a weakly supervised pre-training method, oCLIP, that can acquire effective scene text representations by jointly learning and aligning visual and textual information. Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features, respectively, as well as a visual-textual decoder that models the interaction between textual and visual features for learning effective scene text representations. With the learned textual features, the pre-trained model can attend well to text in images with character awareness. In addition, these designs enable learning from weakly annotated text (i.e., partial text in images without text bounding boxes), which greatly mitigates the data annotation constraint. Experiments over the weakly annotated images in ICDAR2019-LSVT show that our pre-trained model improves F-score by +2.5% and +4.8% when transferring its weights to other text detection and spotting networks, respectively. Moreover, the proposed method consistently outperforms existing pre-training techniques across multiple public datasets (e.g., +3.2% and +1.3% for Total-Text and CTW1500).
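To make the described three-part design concrete, the toy module below wires an image encoder, a character-aware (character-level) text encoder, and a visual-textual decoder in which text features cross-attend to image features. Layer choices, dimensions, and all class names are placeholders, not the oCLIP implementation.

```python
import torch
import torch.nn as nn

class SceneTextPretrainSketch(nn.Module):
    """Toy stand-in: image encoder + character-aware text encoder +
    visual-textual decoder where (weakly annotated) text attends to the image."""

    def __init__(self, vocab_size=100, dim=256):
        super().__init__()
        self.image_encoder = nn.Sequential(                      # placeholder backbone
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)))
        self.char_embed = nn.Embedding(vocab_size, dim)          # character-level tokens
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=1)
        self.vis_text_decoder = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, images, char_ids):
        v = self.image_encoder(images).flatten(2).transpose(1, 2)   # (B, 64, D)
        t = self.text_encoder(self.char_embed(char_ids))            # (B, L, D)
        fused, _ = self.vis_text_decoder(query=t, key=v, value=v)   # text attends to image
        return fused

model = SceneTextPretrainSketch()
out = model(torch.randn(2, 3, 128, 128), torch.randint(0, 100, (2, 12)))
```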
Leveraging advances in natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture where text images are first converted to representative features and then to a sequence of characters via `sequential decoding'. However, scene text images suffer from rich noise of different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. Learning directly from character semantics (instead of noisy image features) effectively corrects falsely detected character candidates, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state-of-the-art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance over multiple normal scene text datasets.
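As a rough illustration of the two-stage idea, the toy pipeline below has an I2C head that predicts character candidates per visual position non-sequentially, and a C2W Transformer that re-decodes the word from the candidate character embeddings. The backbone, shapes, and the hard argmax hand-off are simplifications, not the published I2C2W architecture.

```python
import torch
import torch.nn as nn

class I2C2WSketch(nn.Module):
    """Toy two-stage pipeline: I2C predicts character candidates from visual
    features; C2W re-reads the candidates' embeddings to decode the word."""

    def __init__(self, num_chars=37, dim=256, max_len=25):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, max_len)))                   # (B, D, 1, T)
        self.i2c_head = nn.Linear(dim, num_chars)                 # character candidates
        self.char_embed = nn.Embedding(num_chars, dim)
        self.c2w = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=1)
        self.c2w_head = nn.Linear(dim, num_chars)                 # corrected characters

    def forward(self, images):
        feats = self.backbone(images).squeeze(2).transpose(1, 2)  # (B, T, D)
        i2c_logits = self.i2c_head(feats)                         # stage 1: candidates
        cand = self.char_embed(i2c_logits.argmax(-1))             # candidate semantics
        c2w_logits = self.c2w_head(self.c2w(cand))                # stage 2: word decoding
        return i2c_logits, c2w_logits

i2c, c2w = I2C2WSketch()(torch.randn(2, 3, 32, 128))
```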
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are two-fold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
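The first ingredient, mask-based dynamic class centers, can be pictured with the sketch below, which mask-average-pools support features into centers and re-weights query features by similarity to them. The pooling scheme and sigmoid re-weighting are assumptions for illustration, not the RefT module definition.

```python
import torch
import torch.nn.functional as F

def mask_pooled_class_centers(support_feats, support_masks):
    """Dynamic class centers via mask-average-pooling of support features.

    support_feats: (K, D, H, W) -- features of K support images
    support_masks: (K, H, W)    -- binary masks of the support instances
    """
    m = support_masks.unsqueeze(1)                                     # (K, 1, H, W)
    return (support_feats * m).sum(dim=(2, 3)) / m.sum(dim=(2, 3)).clamp_min(1e-6)

def reweight_query_features(query_feats, centers):
    """Re-weight query features (B, D, H, W) by similarity to centers (K, D)."""
    q = F.normalize(query_feats, dim=1)
    c = F.normalize(centers, dim=1)
    sim = torch.einsum('bdhw,kd->bkhw', q, c)                          # (B, K, H, W)
    weights = sim.max(dim=1, keepdim=True).values.sigmoid()            # (B, 1, H, W)
    return query_feats * weights

centers = mask_pooled_class_centers(torch.randn(3, 256, 32, 32),
                                    (torch.rand(3, 32, 32) > 0.5).float())
out = reweight_query_features(torch.randn(2, 256, 32, 32), centers)
```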
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO 2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
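As a loose illustration (not the published iRMB), the block below combines an inverted-residual expansion, a depthwise convolution for short-distance dependency, and multi-head self-attention for long-distance interaction; expansion ratio, head count, and naming are assumptions.

```python
import torch
import torch.nn as nn

class InvertedResidualAttnBlock(nn.Module):
    """Toy hybrid block: inverted-residual expansion with a depthwise conv
    (local mixing) plus multi-head self-attention (global mixing) and a skip."""

    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))                # local mixing
        t = y.flatten(2).transpose(1, 2)        # (B, HW, hidden)
        t, _ = self.attn(t, t, t)               # global mixing
        y = y + t.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)              # projection back + residual skip

out = InvertedResidualAttnBlock()(torch.randn(2, 64, 16, 16))
```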
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separating supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address the problems of supervised, zero-shot, generalized zero-shot, and open set recognition within a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K class vocabulary, on the Animals with Attributes and ImageNet datasets.
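The distance constraints can be illustrated with a simple hinge loss over vocabulary prototypes, as sketched below; the Euclidean metric, margin value, and function names are assumptions rather than the paper's exact weighted formulation.

```python
import torch
import torch.nn.functional as F

def vocabulary_margin_loss(embeddings, labels, prototypes, margin=0.1):
    """Max-margin distance constraint over vocabulary atoms.

    Each labelled embedding should be closer (in the semantic embedding space)
    to its correct prototype than to every other vocabulary atom, by a margin.

    embeddings: (N, D) projected samples
    labels:     (N,)   indices into prototypes
    prototypes: (V, D) word-vector prototypes for the (open) vocabulary
    """
    dists = torch.cdist(embeddings, prototypes)          # (N, V) Euclidean distances
    pos = dists.gather(1, labels.unsqueeze(1))           # distance to correct prototype
    mask = F.one_hot(labels, prototypes.size(0)).bool()  # positions of the correct atoms
    violations = F.relu(margin + pos - dists).masked_fill(mask, 0.0)
    return violations.mean()

loss = vocabulary_margin_loss(torch.randn(8, 300), torch.randint(0, 50, (8,)),
                              torch.randn(50, 300))
```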